Results 1 - 3 of 3
1.
7th IEEE International conference for Convergence in Technology, I2CT 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1992603

ABSTRACT

This work proposes a unified approach, LISA, to increase the explainability of predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques, namely Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in a black-box model's decisions so that it can be employed in critical applications under the supervision of human specialists. In this work, a chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and the unified method (LISA) for explaining model predictions. An ImageNet-based Inception V2 model is used as the transfer-learning backbone to derive predictions. © 2022 IEEE.
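The LIME component this abstract relies on can be sketched in a few lines: perturb patches of the input on and off, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients rank patch importance. Everything below (the toy `black_box_score` model, the 2x2 patch grid, the kernel width) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a black-box CXR classifier: its "score" is simply the
# mean intensity of the upper-left patch, so that patch is the true driver.
def black_box_score(image):
    return image[:4, :4].mean()

def lime_patch_importance(image, model, grid=2, num_samples=200):
    """LIME-style explanation: randomly switch patches on/off, query the
    model, and fit a weighted linear surrogate over the on/off masks."""
    h, w = image.shape
    ph, pw = h // grid, w // grid
    n_patches = grid * grid
    masks = rng.integers(0, 2, size=(num_samples, n_patches))
    scores = np.empty(num_samples)
    for i, mask in enumerate(masks):
        perturbed = image.copy()
        for p in range(n_patches):
            if mask[p] == 0:  # patch switched off
                r, c = divmod(p, grid)
                perturbed[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = 0.0
        scores[i] = model(perturbed)
    # Proximity kernel: samples closer to the original image count more.
    dist = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(dist ** 2) / 0.25)
    X = masks * np.sqrt(weights)[:, None]
    y = scores * np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # one importance weight per patch

image = np.ones((8, 8))
importance = lime_patch_importance(image, black_box_score)
print(importance.argmax())  # → 0 (the upper-left patch the toy model uses)
```

The surrogate recovers that patch 0 drives the score; in the paper's setting the same idea runs over superpixels of a CXR and the CNN's Covid-19 probability.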

2.
2021 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2021 ; : 3408-3415, 2021.
Article in English | Scopus | ID: covidwho-1705183

ABSTRACT

Analysis of irregularities in Covid-19 data could open a new window onto the unprecedented problems of the current global pandemic. Among the many available sources, radiographs and clinical records are reliable ones for investigating viral infection and planning treatment, and clinical records help track the Covid-19 pandemic. In this paper, we present a Spiking Neural Network (SNN) with supervised synaptic learning to detect abnormalities in chest X-rays (CXRs); in other words, the proposed SNN can distinguish Covid-19-positive cases from healthy ones. We incorporate clinical practice into our decision-making procedure so that Explainable AI (XAI) can be carried out. In addition, a Support Vector Machine (SVM) with Local Interpretable Model-Agnostic Explanations (LIME) provides a reliable analysis of abnormalities in Covid-19 clinical data. © 2021 IEEE.
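The basic building block of the SNN mentioned above is a spiking neuron; a minimal leaky integrate-and-fire (LIF) sketch is shown below. The constants (`tau`, `threshold`, the constant input currents) are illustrative assumptions, not the paper's trained network.

```python
def lif_spikes(input_current, tau=10.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential v leaks
    toward zero while integrating input; when v crosses the threshold,
    the neuron emits a spike and v resets."""
    v = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        v += dt * (-v / tau + current)
        if v >= threshold:
            spikes.append(t)  # record spike time
            v = 0.0           # reset after spiking
    return spikes

# A strong constant drive makes the neuron spike repeatedly;
# a weak drive saturates below threshold and never spikes.
strong = lif_spikes([0.5] * 20)
weak = lif_spikes([0.05] * 20)
print(len(strong), len(weak))
```

In an SNN classifier, supervised synaptic learning adjusts input weights so that, for example, abnormal CXR features drive output neurons into the "strong" regime while healthy ones stay in the "weak" regime.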

3.
2021 IEEE International Conference on Internet of Things and Intelligence Systems, IoTaIS 2021 ; : 98-103, 2021.
Article in English | Scopus | ID: covidwho-1672790

ABSTRACT

In this paper, we develop a framework for identifying lung disease from chest X-ray images by differentiating novel coronavirus disease (COVID-19) and other disease-induced lung-opacity samples from normal cases. We perform image-processing and segmentation tasks and train a customized Convolutional Neural Network (CNN) that achieves reasonable classification accuracy. The black-box nature of such a complex classification model has emerged as a key barrier to applying Artificial Intelligence (AI)-based methods to automate medical decisions, raising skepticism among clinicians. To address this, we quantitatively interpret our model's behavior using a Layer-wise Relevance Propagation (LRP)-based method. We also use a robust, pixel-flipping-based performance metric to evaluate the explainability of the adopted LRP method and to compare it with other explainable methods, namely Local Interpretable Model-Agnostic Explanations (LIME), Guided Backpropagation (GB), and Deep Taylor Decomposition (DTD). © 2021 IEEE.
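The pixel-flipping metric mentioned above can be sketched as follows: zero out pixels from most to least relevant according to an explanation map and record the model score after each flip; the faster the score drops, the better the map identifies what the model actually relies on. The toy linear `model_score` and the 16-step budget are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy black-box model: the score is a fixed hidden weight map summed over
# the image, so each pixel's true relevance is exactly its weight.
weights = rng.random((8, 8))

def model_score(image):
    return float((weights * image).sum())

def pixel_flipping_curve(image, relevance, model, steps=16):
    """Flip (zero out) pixels from most to least relevant and record the
    model score after each step; a faster drop (smaller area under the
    curve) means a better explanation."""
    order = np.argsort(relevance.ravel())[::-1]
    flipped = image.copy().ravel()
    curve = [model(flipped.reshape(image.shape))]
    for idx in order[:steps]:
        flipped[idx] = 0.0
        curve.append(model(flipped.reshape(image.shape)))
    return curve

image = np.ones((8, 8))
good = pixel_flipping_curve(image, weights, model_score)   # ideal map
bad = pixel_flipping_curve(image, -weights, model_score)   # reversed map
print(sum(good) < sum(bad))  # → True: the better map has smaller AUC
```

The same curve, computed with LRP, LIME, GB, and DTD relevance maps against the trained CNN, is how the paper compares explanation quality.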
